40 research outputs found

    A Mini Review of Peer-to-Peer (P2P) for Vehicular Communication

    In recent times, peer-to-peer (P2P) networking has evolved, leveraging its ability to scale beyond server-based networks. Consequently, P2P has emerged as a foundation for future distributed systems in several emerging applications. P2P is a disruptive technology for building applications that scale to numerous concurrent users. In a P2P distributed system, participants act as peers by contributing, sharing, and managing resources in the network. In this paper, P2P for vehicular communication is explored. A comprehensive overview of the operating concepts of both P2P and vehicular communication is examined. In addition, the advantages are discussed for a better understanding of the implementation.

    Measuring driver cognitive distraction through lips and eyebrows

    Cognitive distraction is one of several contributory factors in road accidents. A number of cognitive distraction detection methods have been developed, one of the most popular being physiological measurement. Head orientation, gaze rotation, blinking and pupil diameter are among the physiological parameters commonly measured for driver cognitive distraction. In this paper, lips and eyebrows are studied. These new facial-expression features are prominent and can be easily measured when a person is cognitively distracted. Several types of lip and eyebrow movement can be captured to indicate cognitive distraction. Correlation and classification techniques are used in this paper for performance measurement and comparison. A real-time driving experiment was set up, and faceAPI was installed in the car to capture the driver's facial expressions. Linear regression, support vector machine (SVM), static Bayesian network (SBN) and logistic regression (LR) are used in this study. Results showed that lips and eyebrows are strongly correlated and play a significant role in improving cognitive distraction detection. A dynamic Bayesian network (DBN) with different confidence levels was also used in this study to classify whether a driver is distracted.
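
    As a rough illustration of the correlation-plus-classification approach described above, the sketch below computes the Pearson correlation between lip and eyebrow movement signals and trains SVM and logistic regression classifiers with scikit-learn. The data are synthetic placeholders standing in for the per-frame measurements faceAPI would provide; the feature values, magnitudes and labels are assumptions, not the authors' data.

```python
# Illustrative sketch (not the authors' code): correlate lip and eyebrow
# movement signals, then classify cognitive distraction with SVM and
# logistic regression. All numbers are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_frames = 500

# Hypothetical per-frame movement magnitudes and labels
# (1 = distracted, 0 = attentive).
labels = rng.integers(0, 2, size=n_frames)
lip_movement = 0.5 * labels + rng.normal(0.0, 0.3, size=n_frames)
eyebrow_movement = 0.4 * labels + rng.normal(0.0, 0.3, size=n_frames)

# Correlation between the two facial features.
r = np.corrcoef(lip_movement, eyebrow_movement)[0, 1]
print(f"Pearson r between lips and eyebrows: {r:.2f}")

# Classification: do the two features predict distraction?
X = np.column_stack([lip_movement, eyebrow_movement])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Logistic regression", LogisticRegression())]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")
```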

    Simulation framework for connected vehicles: a scoping review [version 2; peer review: 2 approved]

    Background: V2V (Vehicle-to-Vehicle) communication is a booming research field with a diverse set of services and applications. Most researchers rely on vehicular simulation tools to model traffic and road conditions and to evaluate the performance of network protocols. We conducted a scoping review of simulators reported in the literature, based on successful implementation of V2V systems, tutorials, documentation, examples, and/or discussion groups. Methods: Simulators with only limited available information were excluded. The selected simulators are described individually and compared on their requirements and features, i.e., origin, traffic model, scalability, and traffic features. This scoping review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). The review considered only research published in English (in journals and conference papers) and completed after 2015. Three reviewers then carried out the data extraction phase to retrieve information from the published papers. Results: Most simulators can simulate system behaviour by modelling events according to pre-defined scenarios. The main challenge, however, is integrating three components, namely mobility generators, VANET simulators and network simulators, to simulate a road environment using microscopic, macroscopic or mesoscopic models. These simulators require the integration and synchronisation of the transportation domain and the communication domain. Simulation modelling can be run using different types of simulators that are cost-effective and scalable for evaluating the performance of V2V systems in urban environments. We also considered the ability of the vehicular simulation tools to support wireless sensors. Conclusions: The outcome of this study may reduce the time other researchers need to spend on applications involving V2V systems and may serve as a reference for the study and development of new traffic simulators.
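
    To make the integration-and-synchronisation point concrete, here is a deliberately minimal, self-contained sketch of how a transportation-domain model and a communication-domain model can be stepped in lockstep. The MobilityModel and CommModel classes, the straight-road mobility and the fixed radio range are simplifying assumptions for illustration only; none of the reviewed simulators are implemented here.

```python
# Toy illustration (not any specific simulator): couple a transportation
# model with a communication model, the synchronisation problem the review
# highlights. A real framework would replace MobilityModel with a mobility
# generator such as SUMO and CommModel with a network simulator.
import itertools
import random

class MobilityModel:
    """Minimal microscopic mobility: vehicles move along a straight road."""
    def __init__(self, n_vehicles, road_length=1000.0):
        self.positions = [random.uniform(0, road_length) for _ in range(n_vehicles)]
        self.speeds = [random.uniform(10, 20) for _ in range(n_vehicles)]  # m/s

    def step(self, dt):
        self.positions = [p + v * dt for p, v in zip(self.positions, self.speeds)]

class CommModel:
    """Minimal V2V link model: two vehicles can exchange beacons if in range."""
    def __init__(self, radio_range=300.0):
        self.radio_range = radio_range

    def exchange_beacons(self, positions):
        return [(i, j)
                for i, j in itertools.combinations(range(len(positions)), 2)
                if abs(positions[i] - positions[j]) <= self.radio_range]

mobility, comm = MobilityModel(n_vehicles=5), CommModel()
for t in range(10):                                     # 10 synchronised 1 s steps
    mobility.step(dt=1.0)                               # transportation domain advances
    links = comm.exchange_beacons(mobility.positions)   # communication domain reacts
    print(f"t={t + 1}s: {len(links)} V2V links")
```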

    Physiological measurement based automatic driver cognitive distraction detection

    Vehicle safety and road safety are two important, closely related issues, and road accidents are largely caused by driver distraction. Distractions such as eating, drinking, talking to a passenger, using an IVIS (In-Vehicle Information System) and thinking about something unrelated to driving are among the main causes of road accidents. Driver distraction can be categorized into three types: visual, manual and cognitive. Visual distraction occurs when the driver's eyes are off the road, and manual distraction occurs when the driver takes one or both hands off the steering wheel and places them on something unrelated to driving safety. Cognitive distraction, in contrast, occurs when a driver's mind is not on the road. Cognitive distraction has been found to be the most dangerous of the three, because the thinking process can induce a driver to look at and/or handle something unrelated to safety while driving. This study proposes a physiological measurement approach to detect driver cognitive distraction. Features such as lip, eyebrow, mouth and eye movements, gaze rotation, head rotation and blinking frequency are used for this purpose. Three sets of experiments were conducted. The first was conducted in a lab with faceLAB cameras and served as a pilot study to determine the correlation between mouth movement and eye movement during cognitive distraction. The second was conducted in a real traffic environment using faceAPI cameras to detect lip and eyebrow movement. The third was also conducted in a real traffic environment, but combined the faceLAB and faceAPI toolkits to capture more features. A reliable and stable classification algorithm, the Dynamic Bayesian Network (DBN), was used as the main algorithm for analysis. Several other algorithms, including Support Vector Machine (SVM), Logistic Regression (LR), AdaBoost and Static Bayesian Network (SBN), were used for comparison. Results showed that DBN is the best algorithm for driver cognitive distraction detection. Finally, the results of this study were compared with those of other researchers. Experimental results showed that the lips and eyebrows used in this study are strongly correlated and play a significant role in improving cognitive distraction detection.
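
    The following is a hedged sketch of the kind of multi-classifier comparison the study describes, run on synthetic feature vectors with scikit-learn. GaussianNB is used only as a simple stand-in for a static Bayesian network; the dynamic Bayesian network used as the study's main algorithm would require a dedicated time-series library (e.g. pgmpy) and is not reproduced here.

```python
# Hedged sketch of a classifier comparison on synthetic physiological
# features (e.g. lips, eyebrows, gaze, head rotation, blink frequency).
# Not the study's data or implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_features = 600, 5          # 5 hypothetical physiological features
y = rng.integers(0, 2, size=n_samples)  # 1 = cognitively distracted
X = rng.normal(size=(n_samples, n_features)) + 0.6 * y[:, None]

classifiers = {
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(),
    "AdaBoost": AdaBoostClassifier(),
    "Naive Bayes (SBN stand-in)": GaussianNB(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```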

    Eye and mouth movements extraction for driver cognitive distraction detection

    Cognitive distraction happens when a driver's mind is off the road: the driver is looking at the road but is engaged in a thinking process. Cognitive distraction has been found to be the most dangerous type of driver distraction, as shown in the comparison table and the stem plot of the Control Experiment and Task Experiment results. Eye movement and mouth movement information is obtained using the faceLAB cameras, and their correlation is discussed here. Two sets of experiments (Control and Task) with six participants were completed for this paper. Results are presented in a scatter diagram to show the correlation between eye and mouth movements, while a stem plot shows the difference between the control and task experiment results.
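
    For readers unfamiliar with the two plot types mentioned, the sketch below generates synthetic eye and mouth movement values and draws a scatter diagram (correlation) alongside a stem plot (control-versus-task difference) with matplotlib. The numbers are invented placeholders, not the experiment's measurements.

```python
# Illustrative plotting sketch with synthetic data: scatter diagram of eye
# vs mouth movement, and a stem plot of per-participant task-control gaps.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
eye = rng.normal(1.0, 0.2, size=60)           # hypothetical eye-movement magnitudes
mouth = 0.8 * eye + rng.normal(0, 0.05, 60)   # correlated mouth-movement magnitudes

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.scatter(eye, mouth)
ax1.set(xlabel="eye movement", ylabel="mouth movement", title="Scatter (correlation)")

participants = np.arange(1, 7)
control = rng.normal(0.5, 0.1, size=6)        # control-experiment summary values
task = control + rng.normal(0.3, 0.05, 6)     # task (distraction) values are higher
ax2.stem(participants, task - control)
ax2.set(xlabel="participant", ylabel="task - control", title="Stem (difference)")
plt.tight_layout()
plt.show()
```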

    Physiological measurement used in real time experiment to detect driver cognitive distraction

    This paper discusses how lips and eyebrows are used to detect driver cognitive distraction using the faceAPI toolkit. Several classification algorithms, namely Support Vector Machine (SVM), Logistic Regression (LR), Static Bayesian Network (SBN) and Dynamic Bayesian Network (DBN), are used for accuracy comparison.

    Bayesian Network

    Driver distractions can be categorized into three major types: visual, cognitive and manual. Visual and manual distraction of a driver can be physically detected. However, assessing cognitive distraction is difficult, since it is more of an “internal” distraction rather than an easily measured “external” one. Several methods are available to detect driver cognitive distraction. Physiological measurements, performance measurements (primary and secondary tasks) and rating scales are some of the well-known measures used to detect cognitive distraction. This study focused on physiological measurements, specifically a driver’s eye and mouth movements. Six participants were involved in our experiment, which lasted 8 minutes and 49 seconds for each participant. Eye and mouth movements were obtained using the FaceLAB Seeing Machine cameras, and the magnitudes of the r-values were found to be greater than 60%, indicating that the two movements are strongly correlated.

    Non-intrusive physiological measurement for driver cognitive distraction: Eye and mouth movement

    Driver distractions can be categorized into three major types: visual, cognitive and manual. Visual and manual distraction of a driver can be physically detected. However, assessing cognitive distraction is difficult, since it is more of an “internal” distraction rather than an easily measured “external” one. Several methods are available to detect driver cognitive distraction. Physiological measurements, performance measurements (primary and secondary tasks) and rating scales are some of the well-known measures used to detect cognitive distraction. This study focused on physiological measurements, specifically a driver’s eye and mouth movements. Six participants were involved in our experiment, which lasted 8 minutes and 49 seconds for each participant. Eye and mouth movements were obtained using the FaceLAB Seeing Machine cameras, and the magnitudes of the r-values were found to be greater than 60%, indicating that the two movements are strongly correlated.

    Non intrusive physiological measurement for driver cognitive distraction detection: Eye and mouth movements

    Driver distractions can be categorized into three major types: visual, cognitive and manual. Visual and manual distraction of a driver can be physically detected. However, assessing cognitive distraction is difficult, since it is more of an “internal” distraction rather than an easily measured “external” one. Several methods are available to detect driver cognitive distraction. Physiological measurements, performance measures (primary and secondary tasks) and rating scales are some of the well-known measures used to detect cognitive distraction. This study focused on physiological measurements, specifically a driver's eye and mouth movements. Six participants were involved in our experiment, which lasted 8 minutes and 49 seconds for each participant. Eye and mouth movements were obtained using the FaceLAB Seeing Machine cameras, and the magnitudes of the r-values were found to be greater than 60%, indicating that the two movements are strongly correlated.

    Picture superiority effect in authentication systems for the blind and visually impaired on a smartphone platform

    Pictures are more likely to be remembered than words or text. For smartphone authentication, graphical password interfaces employing both visual objects and auditory cues are more memorable than textual password interfaces among sighted people, because the graphical interface evokes visual imagery in the brain. However, interfaces employing visual imagery have not been studied for the blind and visually impaired. The objective of this research is to demonstrate that graphical password interfaces, designed to evoke visual imagery among blind and visually impaired users, improve the ease of use of smartphone authentication systems. We developed and tested two graphical password systems: BlindLoginV2, which employs the object picture superiority effect, and AudioBlindLogin, which employs auditory cues to enrich the picture superiority effect. We collected quantitative metrics measuring login speed, configuration time and failure rates immediately after training, 1 h later, 1 day later and 1 week later, as well as qualitative evidence through face-to-face interviews. This study shows that blind and visually impaired users benefit from the picture superiority effect: passwords are more memorable and quicker to key in, with greater accuracy, compared to 4-character textual password interfaces. Using the authentication system as an example, we demonstrate that visual imagery can be evoked in blind and visually impaired users through careful design of smartphone interfaces and, when paired with additional sensory cues such as audio, can significantly improve ease of use, thereby enhancing access among visually impaired users to the rich array of security features available in smartphones.
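
    Neither system's implementation is described at code level here, so the following is only a hypothetical sketch of the storage-and-verification side of a picture-based password: the ordered sequence of selected picture identifiers is salted and hashed, much as a textual password would be. The picture identifiers and iteration count are illustrative assumptions.

```python
# Hypothetical sketch (not BlindLoginV2/AudioBlindLogin): treat the ordered
# sequence of chosen picture IDs as the secret, store only a salted hash,
# and verify attempts with a constant-time comparison.
import hashlib
import hmac
import os

def enroll(picture_sequence):
    """Store a salted hash of the ordered picture IDs chosen by the user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", "|".join(picture_sequence).encode(), salt, 100_000)
    return salt, digest

def verify(picture_sequence, salt, digest):
    """Compare a login attempt against the stored hash without leaking timing."""
    attempt = hashlib.pbkdf2_hmac(
        "sha256", "|".join(picture_sequence).encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll(["dog", "key", "cup", "star"])        # hypothetical picture IDs
print(verify(["dog", "key", "cup", "star"], salt, digest))  # True
print(verify(["dog", "cup", "key", "star"], salt, digest))  # False: order matters
```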